
    Resilience Grammar: A Value Sensitive Design Method for Resilience Thinking

    Get PDF
    The resilience grammar is a method for bringing a value sensitive design sensibility to resilience thinking. The method provides a systematic process for researchers, designers, and policymakers to identify and trace resilience pathways in the context of real-world responses to stressors and obstacles. The grammar is composed of seven statement types, each of which brings forward an aspect of resilience. Each statement type is composed of a connecting phrase and an element, in the form "resilience <connecting-phrase> <element>." In this report, we define each statement type in the resilience grammar, provide two brief illustrations of the grammar in action, and conclude with six suggestions for use. Taken together, the resilience grammar enables the expression and integration of diverse stakeholders, values, value tensions, and worldviews into an account of resilience thinking.

    Diverse Voices: A How-To Guide for Facilitating Inclusiveness in Tech Policy

    Get PDF
    The importance of creating inclusive policy cannot be overstated. In response to this challenge, the UW Tech Policy Lab (TPL) developed the Diverse Voices method in 2015. The method uses short, targeted conversations about emerging technology with "experiential experts" from under-represented groups to provide feedback on draft tech policy documents. This process works to increase the likelihood that the language in the finalized tech policy document addresses the perspectives and circumstances of broader groups of people, ideally averting injustice and exclusion.

    Multi-Lifespan Information System Design

    Get PDF
    Contemporary information ecosystems evolve at lightning speed. Last year’s cutting-edge innovations are this year’s standard fare and next year’s relics. An information innovation can be implemented, made available through the Internet, and appropriated within 24 hours. Yet, significant societal problems engage much longer timeframes. In 2010 Friedman and Nathan pointed to a fundamental disconnect between mainstream design thinking and these longer-term problems. To address this disconnect, they proposed a multi-lifespan information system design framing. This workshop builds on previous work by the organizers and others to: (1) elaborate and identify new opportunities and challenges in taking up multi-lifespan information system design problems, and (2) generate critical and constructive discussions for further development of multi-lifespan information system design thinking.

    HCI for peace: from idealism to concrete steps

    Get PDF
    This panel will contribute diverse perspectives on the use of computer technology to promote peace and prevent armed conflict. These perspectives include: the use of social media to promote democracy and citizen participation, the role of computers in helping people communicate across division lines in zones of conflict, how persuasive technology can promote peace, and how interaction design can play a role in post-conflict reconciliation.

    Charting the Next Decade for Value Sensitive Design

    Get PDF
    In the 2010s it is widely recognized by computer and information scientists, social scientists, designers, and philosophers of technology that the design of information systems is not value neutral [5-8,11]. Rather, such systems are value laden, in part because societal values are major factors in shaping systems, and at the same time the design of the technology reinforces, restructures, or uproots societal value structures. Of the many theories and methods for designing in light of this phenomenon, one continues to gain traction for its systematic and overarching consideration of values in the design process: Value Sensitive Design (VSD) [5-7]. The aim of this multidisciplinary workshop is to bring together scholars and practitioners interested in ways values can be made to bear upon design, and to help continue to build a community by sharing experiences, insights, and criticism.

    Augmented Reality: A Technology and Policy Primer

    Get PDF
    The vision for AR dates back at least to the 1960s with the work of Ivan Sutherland. In a way, AR represents a natural evolution of information communication technology. Our phones, cars, and other devices are increasingly reactive to the world around us. But AR also represents a serious departure from the way people have perceived data for most of human history: a Neolithic cave painting or book operates like a laptop insofar as each presents information to the user in a way that is external to her and separate from her present reality. By contrast, AR begins to collapse millennia of distinction between display and environment. Today, a number of companies are investing heavily in AR and beginning to deploy consumer-facing devices and applications. These systems have the potential to deliver enormous value, including to populations with limited physical or other resources. Applications include hands-free instruction and training, language translation, obstacle avoidance, advertising, gaming, museum tours, and much more. AR also presents novel or acute challenges for technologists and policymakers, including privacy, distraction, and discrimination. This whitepaper, which grows out of research conducted across three units through the University of Washington’s interdisciplinary Tech Policy Lab, is aimed at identifying some of the major legal and policy issues AR may present as a novel technology, and outlines some conditional recommendations to help address those issues. Our key findings include:
    1. AR exists in a variety of configurations, but in general, AR is a mobile or embedded technology that senses, processes, and outputs data in real-time, recognizes and tracks real-world objects, and provides contextual information by supplementing or replacing human senses.
    2. AR systems will raise legal and policy issues in roughly two categories: collection and display. Issues tend to include privacy, free speech, and intellectual property, as well as novel forms of distraction and discrimination.
    3. We recommend that policymakers, broadly defined, engage in diverse stakeholder analysis, threat modeling, and risk assessment processes. We recommend that they pay particular attention to: a) the fact that adversaries succeed when systems fail to anticipate behaviors; and b) the fact that not all stakeholders experience AR the same way.
    4. Architectural/design decisions, such as whether AR systems are open or closed, whether data is ephemeral or stored, where data is processed, and so on, will each have policy consequences that vary by stakeholder.

    From Ancient Contemplative Practice to the App Store: Designing a Digital Container for Mindfulness

    Full text link
    Hundreds of popular mobile apps today market their ties to mindfulness. What activities do these apps support, and what benefits do they claim? How do mindfulness teachers, as domain experts, view these apps? We first conduct an exploratory review of 370 mindfulness-related apps on Google Play, finding that mindfulness is presented primarily as a tool for relaxation and stress reduction. We then interviewed 15 U.S. mindfulness teachers from the therapeutic, Buddhist, and Yogic traditions about their perspectives on these apps. Teachers expressed concern that apps that introduce mindfulness only as a tool for relaxation neglect its full potential. We draw upon the experiences of these teachers to suggest design implications for linking mindfulness with further contemplative practices like the cultivation of compassion. Our findings speak to the importance of coherence in design: that the metaphors and mechanisms of a technology align with the underlying principles it follows. Comment: 10 pages (excluding references), 4 figures. To appear in the Proceedings of DIS '20: Designing Interactive Systems Conference 2020.

    Towards Value-Sensitive Learning Analytics Design

    Full text link
    To support ethical considerations and system integrity in learning analytics, this paper introduces two cases of applying the Value Sensitive Design methodology to learning analytics design. The first study applied two methods of Value Sensitive Design, namely stakeholder analysis and value analysis, to a conceptual investigation of an existing learning analytics tool. This investigation uncovered a number of values and value tensions, leading to design trade-offs to be considered in future tool refinements. The second study holistically applied Value Sensitive Design to the design of a recommendation system for the Wikipedia WikiProjects. To proactively consider values among stakeholders, we derived a multi-stage design process that included literature analysis, empirical investigations, prototype development, community engagement, iterative testing and refinement, and continuous evaluation. By reporting on these two cases, this paper responds to a need for practical means to support ethical considerations and human values in learning analytics systems. These two cases demonstrate that Value Sensitive Design could be a viable approach for balancing a wide range of human values, which tend to encompass and surpass ethical issues, in learning analytics design. Comment: The 9th International Learning Analytics & Knowledge Conference (LAK19)

    Algorithms, governance, and governmentality: on governing academic writing

    Get PDF
    Algorithms, or rather algorithmic actions, are seen as problematic because they are inscrutable, automatic, and subsumed in the flow of daily practices. Yet, they are also seen to be playing an important role in organizing opportunities, enacting certain categories, and doing what David Lyon calls “social sorting.” Thus, there is a general concern that this increasingly prevalent mode of ordering and organizing should be governed more explicitly. Some have argued for more transparency and openness; others have argued for more democratic or value-centered design of such actors. In this article, we argue that governing practices—of, and through, algorithmic actors—are best understood in terms of what Foucault calls governmentality. Governmentality allows us to consider the performative nature of these governing practices. It allows us to show how practice becomes problematized, how calculative practices are enacted as technologies of governance, how such calculative practices produce domains of knowledge and expertise, and finally, how such domains of knowledge become internalized in order to enact self-governing subjects. In other words, it allows us to show the mutually constitutive nature of problems, domains of knowledge, and subjectivities enacted through governing practices. In order to demonstrate this, we present attempts to govern academic writing with a specific focus on the algorithmic action of Turnitin.